Scrambling and hiding algorithm of streaming media image information based on state view
YANG Panpan, ZHAO Jichun
Journal of Computer Applications    2021, 41 (6): 1729-1733.   DOI: 10.11772/j.issn.1001-9081.2020091422
Abstract: 240 views | PDF (840KB): 308 downloads
To address the information-security risks of streaming media images, a new scrambling and hiding algorithm for streaming media image information based on state view was proposed. Firstly, a streaming media image enhancement algorithm based on Neighborhood Limited Empirical Mode Decomposition (NLEMD) was used to enhance the image and highlight its details. Then, an efficient encoding and decoding algorithm based on state view was used to encode and decode the streaming media image information, so that the information was scrambled and hidden. Experimental results show that the proposed algorithm can effectively and comprehensively scramble and hide plant and text streaming media image information, and it significantly enhances the streaming media images. The degree of scrambling and hiding achieved by the proposed algorithm is higher than 95%, indicating that the algorithm can protect the security of streaming media image information.
Wind turbine fault sampling algorithm based on improved BSMOTE and sequential characteristics
YANG Xian, ZHAO Jisheng, QIANG Baohua, MI Luzhong, PENG Bo, TANG Chenghua, LI Baolian
Journal of Computer Applications    2021, 41 (6): 1673-1678.   DOI: 10.11772/j.issn.1001-9081.2020091384
Abstract: 280 views | PDF (1063KB): 457 downloads
To solve the imbalance problem of wind turbine datasets, a Borderline Synthetic Minority Oversampling Technique-Sequence (BSMOTE-Sequence) sampling algorithm was proposed. When synthesizing new samples, spatial and temporal characteristics were considered comprehensively, and the new samples were cleaned, so as to effectively reduce the generation of noise points. Firstly, the minority class samples were divided into security class samples, boundary class samples and noise class samples according to the class proportion among the nearest neighbors of each minority class sample. Secondly, for each boundary class sample, the minority class sample set with the closest spatial distance and time span was selected, new samples were synthesized by linear interpolation, and the noise class samples and the samples overlapping between classes were filtered out. Finally, Support Vector Machine (SVM), Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network were used as fault detection models for the wind turbine gearbox, F1-Score, Area Under Curve (AUC) and G-mean were used as performance evaluation indices of the models, and the proposed algorithm was compared with other sampling algorithms on real wind turbine datasets. Experimental results show that, compared with existing algorithms, the classification performance on samples generated by the BSMOTE-Sequence algorithm is better, with an average increase of 3% in F1-Score, AUC and G-mean of the detection models. The proposed algorithm is effectively applicable to wind turbine fault detection where sequential data is imbalanced.
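The interpolation step at the heart of BSMOTE-style oversampling can be sketched as follows. This is a minimal illustration of the technique named in the abstract, not the authors' implementation; the neighbor selection by spatial distance and time span, and the subsequent cleaning, are omitted.

```python
import random

def smote_interpolate(boundary_sample, neighbor, rng=random.Random(0)):
    """Synthesize a new minority-class sample on the line segment between
    a boundary-class sample and one of its selected minority-class
    neighbors, as in SMOTE-style linear interpolation."""
    gap = rng.random()  # random position in [0, 1) along the segment
    return [b + gap * (n - b) for b, n in zip(boundary_sample, neighbor)]

new_sample = smote_interpolate([0.0, 0.0], [1.0, 2.0])
```

Each coordinate of the synthetic sample lies between the corresponding coordinates of the two parents, so the new point stays inside the minority region around the class border.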
Overview of blockchain consensus mechanism for internet of things
TIAN Zhihong, ZHAO Jindong
Journal of Computer Applications    2021, 41 (4): 917-929.   DOI: 10.11772/j.issn.1001-9081.2020111722
Abstract: 1418 views | PDF (1143KB): 2092 downloads
With the continuous development of digital currency, blockchain technology has attracted more and more attention, and research on its key technology, the consensus mechanism, is particularly important. The application of blockchain technology in the Internet of Things (IoT) is one of the hot issues. The consensus mechanism is one of the core technologies of blockchain, and it has an important impact on IoT in terms of degree of decentralization, transaction processing speed, transaction confirmation delay, security, and scalability. Firstly, the architecture characteristics of IoT and the lightweight problem caused by resource limitation were described, the problems faced in implementing blockchain in IoT were briefly summarized, and the demands on blockchain in IoT were analyzed in combination with the operation flow of Bitcoin. Secondly, consensus mechanisms were divided into proof class, Byzantine class and Directed Acyclic Graph (DAG) class; the working principles of these classes of consensus mechanisms were studied, their suitability for IoT was analyzed in terms of communication complexity, their advantages and disadvantages were summarized, and the combined architectures of existing consensus mechanisms and IoT were investigated and analyzed. Finally, the problems of IoT, such as high operating cost, poor scalability and security risks, were studied in depth. The analysis results show that the Internet of Things Application (IOTA) and Byteball consensus mechanisms based on DAG technology have the advantages of fast transaction processing, good scalability and strong security when the number of transactions is large, and they are the future development directions of blockchain consensus mechanisms in the IoT field.
Work location inference method with big data of urban traffic surveillance
CHEN Kai, YU Yanwei, ZHAO Jindong, SONG Peng
Journal of Computer Applications    2021, 41 (1): 177-184.   DOI: 10.11772/j.issn.1001-9081.2020060937
Abstract: 406 views | PDF (1377KB): 448 downloads
Inferring work locations for users from spatiotemporal data is important for real-world applications ranging from product recommendation, precise marketing and transportation scheduling to city planning. However, the problem of location inference based on urban surveillance data has not been explored. Therefore, a work location inference method was proposed for vehicle owners based on traffic surveillance data from sparse cameras. First, urban traffic periphery data such as road networks and Points Of Interest (POIs) were collected, and a road network matching preprocessing method was used to obtain a real road network with rich semantic information such as POIs and cameras. Second, the important parking areas, namely the candidate work areas of the vehicles, were obtained by clustering Origin-Destination (O-D) pairs extracted from vehicle trajectories. Third, using the constraint of the proposed in/out visiting-time pattern, the most likely work area was selected from the multiple candidates. Finally, using the obtained road network and the distribution of POIs on it, the vehicle's reachable POIs were extracted to further narrow the range of the work location. The effectiveness of the proposed method was demonstrated by comprehensive experimental evaluations and case studies on a real-world traffic surveillance dataset of a provincial capital city.
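The in/out visiting-time constraint in the third step can be illustrated with a small sketch. The time windows and the majority ratio below are hypothetical; the paper's exact pattern definition may differ.

```python
def matches_work_pattern(stays, arrive=(6, 10), leave=(16, 20), ratio=0.6):
    """Decide whether a candidate parking area follows a work-location
    visiting pattern: on most observed days the vehicle arrives in the
    morning window and departs in the evening window.
    stays: list of (arrival_hour, departure_hour) observations."""
    hits = sum(1 for a, d in stays
               if arrive[0] <= a <= arrive[1] and leave[0] <= d <= leave[1])
    return hits / len(stays) >= ratio
```

A candidate area that is mostly visited from morning to evening passes the check; an area visited at scattered hours does not.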
Web page blacklist discrimination method based on attention mechanism and ensemble learning
ZHOU Chaoran, ZHAO Jianping, MA Tai, ZHOU Xin
Journal of Computer Applications    2021, 41 (1): 133-138.   DOI: 10.11772/j.issn.1001-9081.2020081379
Abstract: 336 views | PDF (1076KB): 408 downloads
As one of the main Internet applications, a search engine retrieves and returns effective information from Internet resources according to user needs. However, the returned list often contains noisy information such as advertisements and invalid Web pages, which interferes with the user's search and query. Aiming at the complex structural features and rich semantic information of Web pages, a Web page blacklist discrimination method based on attention mechanism and ensemble learning was proposed, and with this method an Ensemble learning and Attention mechanism-based Convolutional Neural Network (EACNN) model was built to filter useless Web pages. First, according to the different categories of HTML tag data on Web pages, multiple Convolutional Neural Network (CNN) base learners based on the attention mechanism were established. Second, an ensemble learning method based on Web page structural features was used to assign different weights to the outputs of the different base learners, realizing the construction of EACNN. Finally, the output of EACNN was used as the analysis result of the Web page content to realize Web page blacklist discrimination. The proposed method focuses on the semantic information of Web pages through the attention mechanism and introduces the structural features of Web pages through ensemble learning. Experimental results show that, compared with baseline models such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), CNN, Long Short-Term Memory (LSTM) network, Gated Recurrent Unit (GRU) and Attention-based CNN (ACNN), EACNN achieves the highest accuracy (0.97), recall (0.95) and F1 score (0.96) on the constructed discrimination dataset oriented to the geographic information field, which verifies the advantages of EACNN in the Web page blacklist discrimination task.
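The second step, weighting the base learners' outputs, amounts to a normalized weighted mean. A minimal sketch, assuming each base learner emits a blacklist probability and the weights come from the page's structural features:

```python
def ensemble_score(base_outputs, weights):
    """Combine the blacklist probabilities produced by the CNN base
    learners using structure-derived weights (normalized weighted mean)."""
    total = sum(weights)
    return sum(p * w for p, w in zip(base_outputs, weights)) / total

# Three hypothetical base learners, weighted 0.5 / 0.3 / 0.2
score = ensemble_score([0.9, 0.6, 0.3], [0.5, 0.3, 0.2])
```

The combined score stays in [0, 1] and can be thresholded to decide whether the page goes on the blacklist.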
Differential private average publishing of numerical stream data for wearable devices
TU Zixuan, LIU Shubo, XIONG Xingxing, ZHAO Jing, CAI Zhaohui
Journal of Computer Applications    2020, 40 (6): 1692-1697.   DOI: 10.11772/j.issn.1001-9081.2019111929
Abstract: 321 views | PDF (709KB): 322 downloads
User health data, such as heart rate and blood glucose, generated in real time by wearable devices is of great significance for health monitoring and disease diagnosis. However, health data is private information of users. In order to publish the average value of numerical stream data of wearable devices while preventing the leakage of users' private information, a new differentially private average publishing method for wearable devices based on adaptive sampling was proposed. Firstly, a global sensitivity was introduced that is adapted to the characteristic that the average of wearable-device stream data fluctuates little. Then, the privacy budget was allocated by adaptive sampling based on Kalman filter error adjustment, so as to improve the availability of the published data. In experiments on two kinds of health data publishing, with a privacy budget of 0.1, which means a high level of privacy protection, the Mean Relative Errors (MRE) of the proposed method on the heart rate dataset and the blood glucose dataset are only 0.01 and 0.08, which are 36% and 33% lower than those of the Filtering and Adaptive Sampling for differential private Time-series monitoring (FAST) algorithm. The proposed method can improve the usability of stream data publishing for wearable devices.
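Under the hood, publishing a private average rests on the standard Laplace mechanism. A minimal sketch, assuming readings clipped to a hypothetical bounded range and omitting the paper's Kalman-filter-based adaptive sampling and budget allocation:

```python
import math
import random

def dp_average(values, lower, upper, epsilon, rng=None):
    """Publish an epsilon-differentially private mean of a bounded window.
    For n values clipped to [lower, upper], the sensitivity of the mean
    is (upper - lower) / n, so Laplace noise with scale sensitivity/epsilon
    protects the published average."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    scale = (upper - lower) / n / epsilon
    u = rng.random() - 0.5  # inverse-CDF sampling of Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

# 100 heart-rate readings of 70 bpm, hypothetical bounds [40, 180] bpm
published = dp_average([70.0] * 100, lower=40.0, upper=180.0, epsilon=1.0)
```

A smaller epsilon (stronger privacy) widens the Laplace noise, which is why the MRE figures in the abstract are reported at the demanding budget of 0.1.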
Survivable virtual network embedding guarantee mechanism based on software defined network
ZHAO Jihong, WU Doudou, QU Hua, YIN Zhenyu
Journal of Computer Applications    2020, 40 (3): 770-776.   DOI: 10.11772/j.issn.1001-9081.2019071244
Abstract: 348 views | PDF (719KB): 237 downloads
For virtual network embedding in Software Defined Network (SDN), existing research mainly considers the acceptance rate but ignores the problem of underlying resource failure in SDN. Aiming at the problem of Survivable Virtual Network Embedding (SVNE) in SDN, a virtual network embedding guarantee mechanism combining a priori protection and a posteriori recovery was proposed. Firstly, the regional resources of the SDN physical network were perceived before a virtual request was accepted. Secondly, backup physical resources were reserved by the a priori protection mechanism for the virtual network elements whose remaining resources in the mapping domain were relatively reduced, and the extended virtual network was embedded into the physical network by the D-ViNE (Deterministic Virtual Network Embedding) algorithm. Finally, when a network element without reserved backup resources failed, the fault was recovered by the a posteriori recovery algorithm, in which nodes and links were recovered by remapping and rerouting respectively. Experimental results show that, compared with the SDN-Survivability Virtual Network Embedding algorithm (SDN-SVNE), the proposed mechanism increases the virtual request acceptance rate by 21.9%, and it also has advantages in terms of the virtual-level and physical-level fault recovery rates.
Virtual field programmable gate array placement strategy based on ant colony optimization algorithm
XU Yingxin, SUN Lei, ZHAO Jiancheng, GUO Songhui
Journal of Computer Applications    2020, 40 (3): 747-752.   DOI: 10.11772/j.issn.1001-9081.2019081359
Abstract: 359 views | PDF (889KB): 403 downloads
To find the optimal deployment that allocates the maximum number of virtual Field Programmable Gate Arrays (vFPGAs) to the minimum number of Field Programmable Gate Arrays (FPGAs) in a reconfigurable cryptographic resource pool, the traditional Ant Colony Optimization (ACO) algorithm was optimized, and a vFPGA deployment strategy based on the optimized ACO algorithm, considering FPGA characteristics and actual requirements, was proposed. Firstly, load balancing among FPGAs was achieved by giving ants the ability to perceive resource status, while frequent migration of vFPGAs was avoided. Secondly, free space was designed to effectively reduce the Service Level Agreement (SLA) conflicts caused by dynamically changing tenant demands. Finally, the CloudSim toolkit was extended to evaluate the performance of the proposed strategy through simulations on synthetic workloads. Simulation results show that the proposed strategy can reduce the number of FPGAs used by improving resource utilization while guaranteeing the quality of system service.
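The resource-aware choice an ant makes can be sketched as the classic pheromone-times-heuristic score. The consolidation heuristic below (prefer nearly-full feasible boards) is a hypothetical stand-in; the abstract does not specify the optimized algorithm's exact heuristic.

```python
def score_fpgas(free_slots, demand, pheromone, alpha=1.0, beta=2.0):
    """Score each FPGA for placing a vFPGA that needs `demand` slots.
    Infeasible boards score 0; among feasible boards, preferring those
    with fewer free slots consolidates load onto fewer FPGAs."""
    scores = []
    for free, tau in zip(free_slots, pheromone):
        if free < demand:
            scores.append(0.0)  # not enough reconfigurable slots
        else:
            eta = 1.0 / free    # heuristic: nearly-full is better
            scores.append((tau ** alpha) * (eta ** beta))
    return scores

# Three FPGAs with 8, 2 and 4 free slots, equal pheromone
scores = score_fpgas([8, 2, 4], demand=2, pheromone=[1.0, 1.0, 1.0])
```

In a full ACO loop the ant would sample a board with probability proportional to these scores and deposit pheromone on good complete deployments; here the tightest feasible board simply gets the highest score.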
Super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning
SUN Zhongfan, ZHOU Zhenghua, ZHAO Jianwei
Journal of Computer Applications    2020, 40 (12): 3471-3477.   DOI: 10.11772/j.issn.1001-9081.2020060966
Abstract: 407 views | PDF (875KB): 386 downloads
For the problem that existing deep-learning based super-resolution reconstruction methods mainly study integer magnification factors rather than arbitrary (e.g. non-integer) ones, a super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning was proposed. Firstly, coordinate projection was used to find the correspondence between the coordinates of the high-resolution and low-resolution images. Secondly, based on the meta-learning network and considering the spatial information of the feature map, the extracted spatial features and coordinate positions were combined as the input of the weight prediction network. Finally, the convolution kernels predicted by the weight prediction network were combined with the feature map to effectively amplify the feature map and obtain the high-resolution image at arbitrary magnification. The proposed spatial meta-learning module can be combined with other deep networks to obtain super-resolution reconstruction methods with arbitrary magnification, which can solve reconstruction problems with a fixed but non-integer scale in real life. Experimental results show that, with equivalent space complexity (network parameters), the time complexity (computational cost) of the proposed method is 25%-50% of that of the other reconstruction methods, its Peak Signal-to-Noise Ratio (PSNR) is 0.01-5 dB higher, and its Structural Similarity (SSIM) is 0.03-0.11 higher.
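The coordinate projection in the first step can be sketched directly: each high-resolution pixel index maps to a low-resolution pixel plus a fractional offset inside it, and the offset is what makes non-integer scales workable. A minimal illustration, assuming 1-D indices for clarity:

```python
def project(hr_index, scale):
    """Map a high-resolution pixel index to the low-resolution pixel it
    falls in, plus its relative offset inside that pixel, for an
    arbitrary (possibly non-integer) magnification factor."""
    pos = hr_index / scale
    lr_index = int(pos)
    return lr_index, pos - lr_index
```

The (index, offset) pair is the kind of positional input a weight prediction network can consume to emit a per-position convolution kernel.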
Dynamic cooperative random drift particle swarm optimization algorithm assisted by evolution information
ZHAO Ji, CHENG Cheng
Journal of Computer Applications    2020, 40 (11): 3119-3126.   DOI: 10.11772/j.issn.1001-9081.2020040481
Abstract: 365 views | PDF (941KB): 510 downloads
A dynamic Cooperative Random Drift Particle Swarm Optimization (CRDPSO) algorithm assisted by evolution information was proposed to improve the population diversity of random drift particle swarm optimization. By using the vector information of context particles, population diversity was increased through dynamic cooperation between particles, improving the search ability of the swarm and making the whole swarm cooperatively search for the global optimum. At the same time, at each iteration, the positions and fitness values of the evaluated solutions were stored in an archive with a binary space partitioning tree structure, which enabled fast fitness function approximation. Because the fitness function approximation enhanced the mutation strategy, the mutation was adaptive and nonparametric. CRDPSO was compared with Differential Evolution (DE), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), continuous Non-revisiting Genetic Algorithm (cNrGA) and three improved Quantum-behaved Particle Swarm Optimization (QPSO) algorithms on a series of standard test functions. Experimental results show that the performance of CRDPSO is optimal on both unimodal and multimodal test functions, which proves the effectiveness of the algorithm.
Analysis of attack events based on multi-source alerts
WANG Chunying, ZHANG Xun, ZHAO Jinxiong, YUAN Hui, LI Fangjun, ZHAO Bo, ZHU Xiaoqin, YANG Fan, LYU Shichao
Journal of Computer Applications    2020, 40 (1): 123-128.   DOI: 10.11772/j.issn.1001-9081.2019071229
Abstract: 484 views | PDF (969KB): 461 downloads
To overcome the difficulty of discovering multi-stage attacks from multi-source alerts, an algorithm was proposed to mine attack sequence patterns. The multi-source alerts were normalized into a unified format by matching them with regular expressions. The redundant information of the alerts was compressed, and alerts of the same stage were clustered according to an association rule set trained from strong association rules, efficiently removing redundant alerts and reducing the number of alerts. Then, the clustered alerts were divided by a sliding window to obtain a candidate attack event dataset, and the attack pattern mining algorithm PrefixSpan was used to find the attack sequence patterns of multi-stage attack events. Experimental results show that the proposed algorithm can analyze alert correlation accurately and efficiently and extract the attack steps of attack events without expert knowledge. Compared with the traditional PrefixSpan algorithm, it improves attack pattern mining efficiency by 48.05%.
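The sliding-window division that produces the candidate attack event dataset can be sketched as follows; the window width and step are hypothetical parameters, and PrefixSpan would then mine frequent subsequences across the resulting windows.

```python
def sliding_windows(alerts, width, step):
    """Split a time-ordered alert stream into candidate attack events.
    alerts: list of (timestamp, alert_type) pairs, sorted by timestamp."""
    if not alerts:
        return []
    windows = []
    t = alerts[0][0]
    while t <= alerts[-1][0]:
        window = [kind for ts, kind in alerts if t <= ts < t + width]
        if window:
            windows.append(window)
        t += step
    return windows

events = sliding_windows([(0, "scan"), (5, "exploit"), (12, "backdoor")],
                         width=10, step=10)
```

Each non-empty window becomes one candidate event sequence for the pattern miner.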
Lifetime estimation for human motion with WiFi channel state information
LIU Lishuang, WEI Zhongcheng, ZHANG Chunhua, WANG Wei, ZHAO Jijun
Journal of Computer Applications    2019, 39 (7): 2056-2060.   DOI: 10.11772/j.issn.1001-9081.2018122431
Abstract: 582 views | PDF (817KB): 310 downloads
Concerning the poor privacy and flexibility of traditional lifetime estimation for human motion, a lifetime estimation system for human motion based on analyzing the amplitude variation of WiFi Channel State Information (CSI) was proposed. In this system, the continuous and complex lifetime estimation problem was transformed into a discrete and simple human motion detection problem. Firstly, CSI was collected while filtering out outliers and noise. Secondly, Principal Component Analysis (PCA) was used to reduce the dimension of the subcarriers, obtaining the principal components and the corresponding eigenvectors. Thirdly, the variance of the principal components and the mean of the first difference of the eigenvectors were calculated, and a Back Propagation Neural Network (BPNN) model was trained with the ratio of these two parameters as the eigenvalue. Fourthly, human motion detection was performed by the trained BPNN model, and the CSI data were divided into segments of equal width whenever human motion was detected. Finally, after human motion detection had been performed on all CSI segments, the human motion lifetime was estimated according to the number of CSI segments in which human motion was detected. In a real indoor environment, the average accuracy of human motion detection reaches 97% and the error rate of the human motion lifetime is less than 10%. The experimental results show that the proposed system can effectively estimate the lifetime of human motion.
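The eigenvalue fed to the BP network in the third step can be sketched directly from its stated definition: the variance of a principal component divided by the mean absolute first difference of the corresponding eigenvector. A minimal sketch on plain lists (the real input would be PCA output over CSI subcarriers):

```python
def motion_feature(principal_component, eigenvector):
    """Compute the BPNN input feature: variance of the principal
    component divided by the mean of the absolute first differences
    of the corresponding eigenvector."""
    n = len(principal_component)
    mean = sum(principal_component) / n
    variance = sum((x - mean) ** 2 for x in principal_component) / n
    diffs = [abs(b - a) for a, b in zip(eigenvector, eigenvector[1:])]
    return variance / (sum(diffs) / len(diffs))

feature = motion_feature([1.0, 3.0], [0.0, 0.5, 1.0])
```

Motion makes the principal component's variance spike relative to the eigenvector's smoothness, so a larger ratio indicates a segment with human motion.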
Ship tracking and recognition based on Darknet network and YOLOv3 algorithm
LIU Bo, WANG Shengzheng, ZHAO Jiansen, LI Mingfeng
Journal of Computer Applications    2019, 39 (6): 1663-1668.   DOI: 10.11772/j.issn.1001-9081.2018102190
Abstract: 1113 views | PDF (1018KB): 647 downloads
Aiming at the problems of low utilization rate, high error rate, lack of recognition ability and manual participation in video surveillance processing in the coastal and inland waters of China, a new ship tracking and recognition method based on the Darknet network model and the YOLOv3 algorithm was proposed to realize ship tracking and real-time detection and recognition of ship types, solving the problem of ship tracking and recognition in important monitored waters. In the Darknet network of the proposed method, the idea of residual networks was introduced, cross-layer jump connections were used to increase the depth of the network, and a ship depth feature matrix was constructed to extract advanced ship features for combination learning and obtaining the ship feature map. On this basis, the YOLOv3 algorithm was introduced to realize target prediction based on global image information, and target region prediction and target class prediction were integrated into a single neural network model. A punishment mechanism was added to improve the distinction of ship features between frames. By using a logistic regression layer for binary classification prediction, target tracking and recognition could be realized quickly and with high accuracy. The experimental results show that the proposed algorithm achieves an average recognition accuracy of 89.5% at a speed of 30 frame/s; compared with traditional and deep learning algorithms, it not only has better real-time performance and accuracy, but also has better robustness to various environmental changes, and can recognize the types and important parts of various ships.
Detection of new ground buildings based on generative adversarial network
WANG Yulong, PU Jun, ZHAO Jianghua, LI Jianhui
Journal of Computer Applications    2019, 39 (5): 1518-1522.   DOI: 10.11772/j.issn.1001-9081.2018102083
Abstract: 676 views | PDF (841KB): 450 downloads
Aiming at the inaccuracy of methods based on ground textures and spatial features in detecting new ground buildings, a novel Change Detection model based on Generative Adversarial Networks (CDGAN) was proposed. Firstly, a traditional image segmentation network (U-net) was improved with the Focal loss function and used as the Generator (G) of the model to generate segmentation results of remote sensing images. Then, a convolutional neural network with 16 layers (VGG-net) was designed as the Discriminator (D) to discriminate between the generated results and the Ground Truth (GT) results. Finally, the Generator and Discriminator were trained adversarially to obtain a Generator with segmentation capability. The experimental results show that the detection accuracy of CDGAN reaches 92%, and the IU (Intersection over Union) value of the model is 3.7 percentage points higher than that of the traditional U-net model, which proves that the proposed model effectively improves the detection accuracy of new ground buildings in remote sensing images.
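The Focal loss used to improve the U-net generator has a simple closed form; a per-pixel sketch with the commonly used default parameters (gamma=2, alpha=0.25, assumptions not stated in the abstract):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for a single pixel prediction p in (0, 1) with label y.
    The (1 - pt)**gamma factor down-weights easy, well-classified pixels
    so the rare 'new building' pixels dominate the segmentation loss."""
    pt = p if y == 1 else 1.0 - p
    weight = alpha if y == 1 else 1.0 - alpha
    return -weight * (1.0 - pt) ** gamma * math.log(pt)
```

With gamma = 0 the expression reduces to plain class-weighted cross-entropy; raising gamma progressively suppresses the loss from confident predictions.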
Image super-resolution reconstruction based on four-channel convolutional sparse coding
CHEN Chen, ZHAO Jianwei, CAO Feilong
Journal of Computer Applications    2018, 38 (6): 1777-1783.   DOI: 10.11772/j.issn.1001-9081.2017112742
Abstract: 327 views | PDF (1085KB): 304 downloads
In order to solve the problem of low image resolution, a new image super-resolution reconstruction method based on four-channel convolutional sparse coding was proposed. Firstly, the input image was rotated by 90° in turn to form the inputs of the four channels, and each input image was decomposed into a high-frequency part and a low-frequency part by a low-pass filter and a gradient operator. Then, the high-frequency and low-frequency parts of the low-resolution image in each channel were reconstructed by convolutional sparse coding and cubic interpolation respectively. Finally, the four channel outputs were weighted and averaged to obtain the reconstructed high-resolution image. The experimental results show that the proposed method has a better reconstruction effect than some classical super-resolution methods in Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) and noise immunity. The proposed method not only overcomes the destruction of consistency between image patches caused by overlapping patches, but also improves the detail contours of the reconstructed image and enhances its stability.
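The four-channel rotate-reconstruct-average scheme can be sketched as follows, with a placeholder for the per-channel reconstruction (which in the paper is convolutional sparse coding plus cubic interpolation) and a plain mean in place of the learned weights:

```python
def rot90(img):
    """Rotate a 2-D list of pixel rows by 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def four_channel_mean(img, reconstruct):
    """Apply `reconstruct` to four 90-degree rotations of the input,
    rotate each result back to the original orientation, and average."""
    outputs, rotated = [], img
    for k in range(4):
        out = reconstruct(rotated)
        for _ in range((4 - k) % 4):  # undo the k input rotations
            out = rot90(out)
        outputs.append(out)
        rotated = rot90(rotated)
    h, w = len(img), len(img[0])
    return [[sum(o[i][j] for o in outputs) / 4.0 for j in range(w)]
            for i in range(h)]

# With an identity "reconstruction" the average returns the input
result = four_channel_mean([[1.0, 2.0], [3.0, 4.0]], reconstruct=lambda x: x)
```

Averaging the four differently oriented reconstructions is what suppresses direction-dependent artifacts in the final high-resolution image.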
High-precision calibration and measurement method based on stereo vision
KONG Yingqiao, ZHAO Jiankang, XIA Xuan
Journal of Computer Applications    2017, 37 (6): 1798-1802.   DOI: 10.11772/j.issn.1001-9081.2017.06.1798
Abstract: 515 views | PDF (757KB): 564 downloads
In a stereo vision measurement system, the distortion caused by the optical system makes the imaging of the target deviate from the theoretical imaging point, which results in measurement error. In order to improve the accuracy of the measuring system, a new measurement method based on stereo vision was proposed. Firstly, a quartic polynomial over the whole imaging plane was fitted from the pixel resolution at each corner point of the calibration board; the coefficients of the fitted polynomial were proportional to the distance from the object to the camera. Then, the longitudinal distance of the detected object was measured using the distance-measuring principle of the binocular model. Finally, based on the obtained polynomial, a monocular camera was used to measure the transverse dimension of the detected object. The experimental results show that, when the distance between the object and the camera is within 5 m, the longitudinal distance error of the proposed method is less than 5%, and when the object is 1 m away from the camera, the measurement error of the transverse width is within 0.5 mm, which approaches the theoretical maximum resolution.
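The final transverse measurement reduces to pixel extent times the locally fitted mm-per-pixel resolution. A minimal sketch, with a hypothetical constant-resolution coefficient standing in for the quartic surface the paper fits from the calibration-board corner points:

```python
def pixel_resolution(x, y, coeffs):
    """Evaluate a fitted polynomial resolution surface (mm per pixel) at
    image coordinates (x, y). `coeffs` maps exponent pairs (i, j), with
    i + j <= 4 for a quartic surface, to fitted coefficients."""
    return sum(c * (x ** i) * (y ** j) for (i, j), c in coeffs.items())

def transverse_width(px_left, px_right, y, coeffs):
    """Transverse size of the object: pixel extent times the resolution
    evaluated at the midpoint of the measured span."""
    x_mid = (px_left + px_right) / 2.0
    return abs(px_right - px_left) * pixel_resolution(x_mid, y, coeffs)

# Constant 0.5 mm/pixel surface, purely illustrative
width_mm = transverse_width(100, 140, y=0, coeffs={(0, 0): 0.5})
```

In the full method the coefficients also scale with the binocular longitudinal distance, so the longitudinal measurement feeds the transverse one.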
Unified algorithm for scattered point cloud denoising and simplification
ZHAO Jingdong, YANG Fenghua, GUO Yingxin
Journal of Computer Applications    2017, 37 (10): 2879-2883.   DOI: 10.11772/j.issn.1001-9081.2017.10.2879
Abstract: 486 views | PDF (864KB): 410 downloads
Since it is difficult to denoise and simplify three-dimensional point cloud data with the same parameter, a unified algorithm for denoising and simplification of scattered point clouds based on the Extended Surface Variation based Local Outlier Factor (ESVLOF) was proposed. Through analysis of the definition of ESVLOF, its properties were given. Using the surface variability computed during denoising and the default similarity coefficient, a parameter γ that decreases as surface variation increases was constructed and then used as the local threshold for denoising and simplifying the point cloud. The simulation results show that this method can preserve the geometric characteristics of the original data. Compared with traditional 3D point-cloud preprocessing, the efficiency of this method is nearly doubled.
Mean-shift segmentation algorithm based on density revise of saliency
ZHAO Jiangui, SIMA Haifeng
Journal of Computer Applications    2016, 36 (4): 1120-1125.   DOI: 10.11772/j.issn.1001-9081.2016.04.1120
Abstract: 524 views | PDF (1013KB): 397 downloads
To solve the faulty segmentation of the mean-shift segmentation algorithm under fixed spatial and color bandwidths, a mean-shift segmentation algorithm based on density revision with saliency features was proposed. Firstly, a region saliency computing method was proposed on the basis of the density estimation of main color quantization. Secondly, region saliency was fused with pixel-level saliency as a density modifying factor, and the modified image was used as the input for mean-shift segmentation. Finally, scattered regions were merged to obtain the final segmentation results. The experimental results show that, with respect to the ground-truth boundaries, the average precision and recall of the proposed segmentation algorithm are 0.64 and 0.78 over 4 scales. Compared with other methods, the accuracy of the proposed method is significantly improved. It can effectively improve the integrity of the target and the robustness of natural color image segmentation.
Fast reconstruction algorithm for photoacoustic computed tomography in vivo
JIANG Zibo, ZHAO Jingxiu, ZHANG Yuanke, MENG Jing
Journal of Computer Applications    2016, 36 (3): 811-814.   DOI: 10.11772/j.issn.1001-9081.2016.03.811
Abstract: 451 views | PDF (602KB): 404 downloads
Focusing on the issue that the data acquisition amount of Photoacoustic Computed Tomography (PACT) based on ultrasonic arrays is generally huge and the imaging process is time-consuming, a fast photoacoustic computed tomography method using Principal Component Analysis (PCA) was proposed to extend its applications to the field of hemodynamics. First, the matrix of image samples was constructed from part of the full-sampling data. Second, a projection matrix representing the signal features was derived by decomposition of the sample matrix. Finally, high-quality three-dimensional photoacoustic images were recovered by this projection matrix under three-fold under-sampling. The experimental results on in vivo back-vascular imaging of a rat show that, compared with the traditional back-projection method, the data acquisition amount of PACT using PCA can be decreased by about 35%, and the three-dimensional reconstruction speed is improved by about 40%. As a result, both fast data acquisition and high-accuracy image reconstruction are achieved.
Configuration tool design based on control-oriented multi-core real-time operating system
JIANG Jianchun, CHEN Huiling, DENG Lu, ZHAO Jianpeng
Journal of Computer Applications    2016, 36 (3): 765-769.   DOI: 10.11772/j.issn.1001-9081.2016.03.765
Abstract: 470 views | PDF (747KB): 340 downloads
Compared with single-core operating systems, multi-core real-time operating systems are more functional and complicated. Aiming at the problem that multi-core operating systems are difficult to configure, tailor and transplant, a new configuration tool for multi-core real-time operating system applications was proposed, which can improve the application development efficiency of multi-core real-time operating systems and reduce the error rate. First, based on CMOS (Control-oriented Multi-core Operating System), a multi-core real-time operating system independently developed by Chongqing University of Posts and Telecommunications, the configuration tool was designed hierarchically. According to the demands of CMOS, a visualized configuration tool was designed to implement the interface generation engine and automatic code generation. Afterwards, in order to ensure the correctness of the configuration logic, configuration correlation detection was proposed. The simulation results show that the CMOS configuration tool is suitable for the CMOS operating system because of its short code generation time and low error rate. Compared with manual troubleshooting by developers, correlation detection accelerates troubleshooting by quickly locating the erroneous code and ensures the correctness of the generated configuration file. Thus the configuration tool can promote the application of the CMOS multi-core operating system.
Personal relation extraction based on text headline
YAN Yang, ZHAO Jiapeng, LI Quangang, ZHANG Yang, LIU Tingwen, SHI Jinqiao
Journal of Computer Applications    2016, 36 (3): 726-730.   DOI: 10.11772/j.issn.1001-9081.2016.03.726
In order to overcome the interference of non-person entities, the difficulty of selecting feature words, and the influence of multiple persons on target personal relation extraction, this paper proposed person judgment based on a decision tree, relation feature word generation based on minimum set cover, and a statistical approach based on three-layer sentence pattern rules. In the first step, 18 features were extracted from the attribute files of the China Conference on Machine Learning (CCML) competition 2015, and a C4.5 decision tree was used as the classifier, achieving 98.2% recall and 92.6% precision; the results of this step were used as the input of the next step. Next, an algorithm based on minimum set cover was used: the feature word set covers all the personal relations while its scale is kept at a proper level, and it is used to identify the relation type in a text headline. In the last step, a statistical method based on three-layer sentence pattern rules was used to filter out rules of small proportion and to judge, from the positive and negative proportions of the sentence pattern rules, whether a personal relation is correct. The experimental results show that the approach achieves 82.9% recall, 74.4% precision and 78.4% F1-measure, so the proposed method can be applied to personal relation extraction from text headlines, which helps to construct a personal relation knowledge graph.
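The minimum set cover step admits a standard greedy sketch; the relation types and feature words below are made-up toy data, not the CCML corpus:

```python
# Greedy minimum set cover: pick feature words until every personal
# relation type is covered, keeping the word set small.
def greedy_set_cover(universe, candidates):
    """candidates maps feature_word -> set of relations it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the word covering the most still-uncovered relations
        word = max(candidates, key=lambda w: len(candidates[w] & uncovered))
        gain = candidates[word] & uncovered
        if not gain:
            break            # remaining relations cannot be covered
        chosen.append(word)
        uncovered -= gain
    return chosen

relations = {"marriage", "cooperation", "lawsuit", "kinship"}
words = {
    "wed":     {"marriage", "kinship"},
    "partner": {"cooperation"},
    "sue":     {"lawsuit"},
    "divorce": {"marriage"},
}
print(greedy_set_cover(relations, words))
```

The greedy choice covers all four relation types with three words, leaving the redundant "divorce" out of the feature word set.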
Image denoising algorithm based on sparse representation and nonlocal similarity
ZHAO Jingkun, ZHOU Yingyue, LIN Maosong
Journal of Computer Applications    2016, 36 (2): 551-555.   DOI: 10.11772/j.issn.1001-9081.2016.02.0551
For the problem of denoising images corrupted by mixed noise such as Additive White Gaussian Noise (AWGN) combined with Salt-and-Pepper Impulse Noise (SPIN) and Random-Valued Impulse Noise (RVIN), an improved image restoration algorithm based on the existing weighted encoding method was proposed, integrating image priors of sparse representation and non-local similarity. Firstly, sparse representation over a dictionary was used to build a variational denoising model, and a weighting factor was designed for the data fidelity term to suppress impulse noise. Secondly, the non-local means method was used to obtain an initial denoised image, from which a mask matrix was built to remove impulse noise points and obtain reliable non-local similarity prior knowledge. Finally, the sparsity prior and the non-local similarity prior were integrated into the regularization terms of the variational model, and the final denoised image was obtained by solving the model. The experimental results show that, under different noise ratios, the Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm is 1.7 dB higher than that of the fuzzy weighted non-local means filter, and the Feature Similarity Index (FSIM) is 0.06 higher; compared with the weighted encoding method, the PSNR is 0.64 dB higher and the FSIM is 0.03 higher. The proposed method has better recovery performance, especially for images with strong textures, and can retain the real information of the image.
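The weighting factor in the data fidelity term can be sketched as follows; the logistic form and its parameters are illustrative assumptions, not the exact weights of the paper:

```python
import numpy as np

# Weighting factor for the data-fidelity term: pixels whose coding
# residual is large (likely impulse noise) receive weights near zero,
# so they barely influence the sparse-coding fit.
def fidelity_weights(residual, a=10.0, tau=1.0):
    # smooth logistic weight in (0, 1]; a and tau are made-up values
    return 1.0 / (1.0 + np.exp(a * (np.abs(residual) - tau)))

r = np.array([0.05, 0.1, 5.0, -8.0, 0.2])   # two impulse-like outliers
w = fidelity_weights(r)
print(np.round(w, 3))
```

In a weighted-encoding iteration the weights would be recomputed from the current residual after each sparse-coding update, progressively excluding the impulse-corrupted pixels.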
K-nearest neighbor searching algorithm for laser scattered point cloud
ZHAO Jingdong, YANG Fenghua
Journal of Computer Applications    2016, 36 (10): 2863-2869.   DOI: 10.11772/j.issn.1001-9081.2016.10.2863
Aiming at the large data volume and surface characteristics of laser scattered point clouds, a K-Nearest Neighbors (KNN) searching algorithm for laser scattered point clouds was put forward to reduce memory usage and improve processing efficiency. Firstly, only the numbers of non-empty subspaces were stored through multistage classification and dynamic linked list storage. Adjacent subspaces were coded in ternary, the pointer connections between adjacent subspaces were established through the dual relationship of the codes, and a generalized table containing all the information required for KNN searching was constructed; then the KNN were searched. During the search, candidate points outside the inscribed sphere of the filtration cube were deleted directly when calculating the distances from the measured point to the candidate points, roughly halving the number of candidate points participating in the sort by distance. Both dividing principles, whether dependent on the K value or not, can be used to compute different K neighborhoods. Experimental results prove that the proposed algorithm not only has low memory usage but also high efficiency.
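The inscribed-sphere filtering step can be sketched in NumPy; the brute-force cube gathering below stands in for the paper's generalized-table lookup, and all sizes are illustrative:

```python
import numpy as np

# Gather candidates from a cube of half-width h around the query, drop
# candidates outside the cube's inscribed sphere (distance > h), then
# sort only the survivors by distance.
def knn_inscribed_sphere(points, query, k, h):
    in_cube = np.all(np.abs(points - query) <= h, axis=1)
    cand = points[in_cube]
    d = np.linalg.norm(cand - query, axis=1)
    keep = d <= h                    # inscribed-sphere filter
    cand, d = cand[keep], d[keep]
    order = np.argsort(d)[:k]
    return cand[order], d[order]

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(5000, 3))
nbrs, dists = knn_inscribed_sphere(pts, np.zeros(3), k=5, h=0.3)
print(dists)
```

The inscribed sphere occupies pi/6, about 52%, of the cube's volume, which is consistent with the roughly halved candidate count described above.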
Testing data generation method based on fireworks explosion optimization algorithm
DING Rui, DONG Hongbin, FENG Xianbin, ZHAO Jiahua
Journal of Computer Applications    2016, 36 (10): 2816-2821.   DOI: 10.11772/j.issn.1001-9081.2016.10.2816
Aiming at the problem of path coverage test data generation, a new test data generation method based on an improved Fireworks Explosion Optimization (FEO) algorithm was proposed. First, the key-point path method was used to represent program paths, and the hard-covered paths were defined in terms of the theoretical paths, easy-covered paths and infeasible paths; the easy-covered paths adjacent to the hard-covered paths, together with their test data, were recorded and used as part of the initial fireworks to improve convergence speed, while the remaining initial fireworks were created randomly. Then, according to the individuals' fitness values, an adaptive blast radius was designed to improve the convergence rate, and the idea of boundary value testing was introduced to repair border-crossing sparks. Compared with seven other optimization algorithms for test data generation, including fireworks explosion optimization with adaptive radius and heuristic information (NFEO), FEO, F-method and NF-method, the simulation results show that the proposed algorithm has lower computational time complexity and better convergence performance.
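The adaptive blast radius and the boundary repair of border-crossing sparks can be sketched as follows; the radius formula, bounds and sphere objective are illustrative assumptions:

```python
import numpy as np

# Adaptive blast radius: fitter fireworks explode within a smaller
# radius (local refinement), worse ones within a larger radius
# (exploration). Fitness is minimised here.
def blast_radius(fitness, r_max=1.0, eps=1e-12):
    f = np.asarray(fitness, dtype=float)
    return r_max * (f - f.min() + eps) / (f.max() - f.min() + eps)

def explode(x, radius, n_sparks, rng, low=-5.0, high=5.0):
    sparks = x + rng.uniform(-radius, radius, size=(n_sparks, x.size))
    return np.clip(sparks, low, high)   # repair border-crossing sparks

rng = np.random.default_rng(2)
fireworks = rng.uniform(-5.0, 5.0, size=(4, 2))
fit = np.sum(fireworks**2, axis=1)      # sphere objective as stand-in
radii = blast_radius(fit)
best = np.argmin(fit)
sparks = explode(fireworks[best], radii[best], 6, rng)
print(radii)
```

Clipping to the search bounds is one simple reading of the boundary-value repair; the paper's exact modification rule may differ.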
Community detection model in large scale academic social networks
LI Chunying, TANG Yong, TANG Zhikang, HUANG Yonghang, YUAN Chengzhe, ZHAO Jiandong
Journal of Computer Applications    2015, 35 (9): 2565-2568.   DOI: 10.11772/j.issn.1001-9081.2015.09.2565
Concerning the problems that community detection algorithms based on label propagation in complex networks require pre-set parameters in real networks and produce redundant labels, a community detection model for large-scale academic social networks was proposed. The model detects Utmost Maximal Cliques (UMC) in the academic social network, where the intersection of any two UMCs is the empty set, and then lets the nodes of each UMC share a unique label, reducing redundant labels and random factors and thereby increasing the efficiency and stability of the algorithm. Meanwhile, the model propagates labels to UMC-adjacent nodes using closeness, spreading outward from the core node groups (UMC), while nodes not adjacent to any UMC are updated according to the maximum weight of their neighbor nodes. In the post-processing stage, an adaptive threshold method removes useless labels, thereby effectively overcoming the pre-set parameter limitation in real complex networks. The experimental results on the data set of the academic social networking platform SCHOLAT prove that the model is able to assign nodes with certain commonality to the same community, and it provides support for future precise personalized services in academic social networks, such as latent friend recommendation and paper sharing.
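The seeding-then-propagation idea can be sketched on a toy graph; the UMC detection itself is omitted here, and the two cliques below are given by hand:

```python
from collections import Counter

# Sketch: seed each clique-like core with one shared label, then
# propagate labels to the remaining nodes by neighbour majority.
def propagate(adj, seeds, rounds=5):
    labels = dict(seeds)                    # node -> community label
    for _ in range(rounds):
        for node in adj:
            if node in seeds:
                continue                    # core nodes keep their label
            votes = Counter(labels[n] for n in adj[node] if n in labels)
            if votes:
                labels[node] = votes.most_common(1)[0][0]
    return labels

# Two triangles (cores) bridged by node 'x' (hypothetical toy graph)
adj = {
    'a': {'b', 'c'}, 'b': {'a', 'c', 'x'}, 'c': {'a', 'b', 'x'},
    'x': {'b', 'c', 'd'},
    'd': {'e', 'f', 'x'}, 'e': {'d', 'f'}, 'f': {'d', 'e'},
}
seeds = {'a': 0, 'b': 0, 'c': 0, 'd': 1, 'e': 1, 'f': 1}
print(propagate(adj, seeds))
```

Because the core labels are fixed, the randomness of plain label propagation disappears, which is the stability gain the model aims for; the model itself additionally weights votes by closeness rather than simple counts.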
Polynomial interpolation algorithm framework based on osculating polynomial approximation
ZHAO Xiaole, WU Yadong, ZHANG Hongying, ZHAO Jing
Journal of Computer Applications    2015, 35 (8): 2266-2273.   DOI: 10.11772/j.issn.1001-9081.2015.08.2266

Polynomial interpolation is a common approximation method in approximation theory, widely used in numerical analysis, signal processing, and so on. Traditional polynomial interpolation algorithms are mainly developed by combining numerical analysis with experimental results, lacking a unified theoretical description and a regular solution process. A unified theoretical framework for polynomial interpolation algorithms based on osculating polynomial approximation theory was proposed. Existing interpolation algorithms can be analyzed and new algorithms can be developed under this framework, which consists of the number of sample points, the osculating order at the sample points, and the derivative approximation rules. The representation of existing mainstream interpolation algorithms within the proposed framework was analyzed, and the general process for developing new algorithms was demonstrated using a four-point, second-order osculating polynomial interpolation. Theoretical analysis and numerical experiments show that almost all mainstream polynomial interpolation algorithms belong to osculating polynomial interpolation, and their effects are strongly related to the number of sample points, the osculating order and the derivative approximation rules.
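One concrete instance of the framework can be sketched directly: four sample points, first-order osculation at the two interior samples, and central differences as the derivative approximation rule. This is a Catmull-Rom-style cubic, given as an illustration of the framework's three ingredients rather than the paper's exact algorithm:

```python
# Four-point cubic interpolation: the interpolant matches the values
# at p1 and p2 and the central-difference derivative estimates there.
def interp4(p0, p1, p2, p3, t):
    m1 = (p2 - p0) / 2.0            # derivative approximation at p1
    m2 = (p3 - p1) / 2.0            # derivative approximation at p2
    t2, t3 = t * t, t * t * t
    h00 = 2*t3 - 3*t2 + 1           # cubic Hermite basis functions
    h10 = t3 - 2*t2 + t
    h01 = -2*t3 + 3*t2
    h11 = t3 - t2
    return h00*p1 + h10*m1 + h01*p2 + h11*m2

# Reproduces the interior samples at t = 0 and t = 1
print(interp4(1.0, 2.0, 4.0, 7.0, 0.0), interp4(1.0, 2.0, 4.0, 7.0, 1.0))
```

Changing any of the three ingredients, more sample points, higher osculating order, or a different derivative rule, yields a different member of the same family.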

Particle swarm optimization algorithm using opposition-based learning and adaptive escape
LYU Li, ZHAO Jia, SUN Hui
Journal of Computer Applications    2015, 35 (5): 1336-1341.   DOI: 10.11772/j.issn.1001-9081.2015.05.1336

To overcome the slow convergence of Particle Swarm Optimization (PSO) and its tendency to fall into local optima, a PSO algorithm using opposition-based learning and adaptive escape was proposed. The proposed algorithm divides the states of population evolution into a normal state and a premature state by setting a threshold. If the population is in the normal state, the standard PSO algorithm is adopted for evolution; otherwise, the population has fallen into prematurity, and a strategy of opposition-based learning and adaptive escape is adopted: the opposite solution of each individual optimal position is generated by opposition-based learning, which increases the learning ability of the particles, enhances the ability to escape from local optima, and raises the optimization rate. Experiments were conducted on 8 classical benchmark functions; the results show that the proposed algorithm has better convergence speed and precision than classical PSO variants such as the Fully Informed Particle Swarm optimization (FIPS), the self-organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients (HPSO-TVAC), the Comprehensive Learning Particle Swarm Optimizer (CLPSO), Adaptive Particle Swarm Optimization (APSO), Double Center Particle Swarm Optimization (DCPSO) and the Particle Swarm Optimization algorithm with Fast convergence and Adaptive escape (FAPSO).
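The opposition-based escape step can be sketched as follows; the bounds, the swarm and the sphere objective are illustrative assumptions:

```python
import numpy as np

# Premature-state escape: reflect each personal best inside the search
# bounds (the opposite solution) and keep it only if fitness improves.
def opposition_escape(pbest, low, high, f):
    opp = low + high - pbest             # opposition-based solution
    keep_opp = f(opp) < f(pbest)         # greedy acceptance
    return np.where(keep_opp[:, None], opp, pbest)

sphere = lambda X: np.sum(X**2, axis=1)  # minimise; optimum at origin
rng = np.random.default_rng(3)
P = rng.uniform(3.0, 5.0, size=(6, 2))   # swarm stuck far from optimum
P2 = opposition_escape(P, 0.0, 5.0, sphere)
print(sphere(P).min(), "->", sphere(P2).min())
```

Reflecting a stagnant personal best to the opposite side of the search space gives the particle a new region to learn from without discarding positions that were already good.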

Near outlier detection of scattered point cloud
ZHAO Jingdong, YANG Fenghua, LIU Aijng
Journal of Computer Applications    2015, 35 (4): 1089-1092.   DOI: 10.11772/j.issn.1001-9081.2015.04.1089

Concerning that the original Surface Variation based Local Outlier Factor (SVLOF) cannot filter out outliers on the edges or corners of a three-dimensional solid, a new near-outlier detection algorithm for scattered point clouds was proposed. The algorithm first defines SVLOF on the k neighborhood-like region, expanding the definition of SVLOF. The expanded SVLOF can filter outliers not only on smooth surfaces but also on the edges and corners of a three-dimensional solid, while still retaining as much threshold space as the original SVLOF. The experimental results on simulated and measured data show that the new algorithm can detect the near outliers of a scattered point cloud effectively without an obvious loss of efficiency.
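The surface-variation quantity underlying SVLOF can be sketched as the smallest-eigenvalue ratio of a neighborhood covariance matrix; the flat patch and the lifted point below are synthetic toy data:

```python
import numpy as np

# Surface variation of a point neighborhood: ratio of the smallest
# covariance eigenvalue to the eigenvalue sum. Near zero on a smooth
# surface; noticeably larger when a point is lifted off the surface.
def surface_variation(neighbors):
    C = np.cov(np.asarray(neighbors).T)
    w = np.sort(np.linalg.eigvalsh(C))
    return w[0] / w.sum()

rng = np.random.default_rng(4)
plane = np.c_[rng.uniform(0, 1, (30, 2)), np.zeros(30)]  # flat patch
lifted = plane.copy()
lifted[0, 2] = 0.5                                       # a near outlier
print(surface_variation(plane), surface_variation(lifted))
```

Thresholding this quantity flags near outliers on smooth regions; the paper's contribution is redefining the neighborhood so that the same idea also works at edges and corners.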

Information extraction of history evolution based on Wikipedia
ZHAO Jiapeng, LIN Min
Journal of Computer Applications    2015, 35 (4): 1021-1025.   DOI: 10.11772/j.issn.1001-9081.2015.04.1021

Domain concepts in software engineering are complex and varied, and the historical development of these concepts is hard to capture, making them difficult for students to understand and remember. An effective method for extracting historical evolution information in software engineering was proposed. Firstly, candidate sets of entities and entity relationships were extracted from Wikipedia with Natural Language Processing (NLP) and information extraction technology. Secondly, the entity relationships closest to historical evolution were extracted from the candidate sets using TextRank. Finally, a knowledge base was constructed from quintuples composed of neighboring time entities and concept entities together with the key entity relationship. In the information extraction process, the TextRank algorithm was improved on the basis of text semantic features to increase the accuracy. The results verify the effectiveness of the proposed algorithm, and the knowledge base can organize the concepts of the software engineering field according to their temporal characteristics.
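The TextRank scoring step can be sketched with a plain power-iteration loop over a weighted graph; the similarity matrix below is a hypothetical toy example, and the paper's semantic-feature weighting is not reproduced:

```python
# TextRank: PageRank-style update over a weighted similarity graph.
# score(i) = (1 - d) + d * sum_j w[j][i] / out(j) * score(j)
def textrank(weights, d=0.85, iters=50):
    n = len(weights)
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                if j == i or weights[j][i] == 0:
                    continue
                out_j = sum(weights[j])      # total outgoing weight of j
                s += weights[j][i] / out_j * scores[j]
            new.append((1 - d) + d * s)
        scores = new
    return scores

# Hypothetical symmetric similarity matrix for four candidate relations
W = [
    [0, 3, 1, 1],
    [3, 0, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
]
print(textrank(W))
```

The best-connected candidate accumulates the highest score, so ranking by score surfaces the relations most central to the evolution narrative.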

Bridge crack measurement system based on binocular stereo vision technology
WANG Lin, ZHAO Jiankang, XIA Xuan, LONG Haihui
Journal of Computer Applications    2015, 35 (3): 901-904.   DOI: 10.11772/j.issn.1001-9081.2015.03.901

A bridge crack measurement system based on binocular stereo vision technology was proposed, considering the low efficiency, high cost and low precision of bridge crack measurement at home and abroad. The system uses binocular stereo vision methods such as camera calibration, image matching and three-dimensional coordinate reconstruction to calculate the width and length of bridge cracks. The results measured by the binocular vision system and by a monocular vision system under the same conditions were compared, showing that the binocular measurement system steadily keeps the width relative error within 10% and the length relative error within 1%, while the monocular results varied widely with the viewing angle, with a maximum width relative error of 19.41% and a maximum length relative error of 54.35%. The bridge crack measurement system based on binocular stereo vision is practical, with strong robustness and high precision.
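The three-dimensional reconstruction step can be sketched for a rectified stereo pair; the intrinsics, baseline and pixel coordinates below are invented for illustration and assume equal focal lengths in x and y:

```python
# Depth from disparity for a rectified stereo pair, then metric crack
# width from the 3D positions of the two crack edges.
def to_3d(u, v, disparity, fx, baseline, cx, cy):
    Z = fx * baseline / disparity    # depth from disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fx            # assumes fy == fx
    return (X, Y, Z)

fx, baseline, cx, cy = 1200.0, 0.12, 640.0, 360.0   # made-up calibration
# Two crack-edge pixels on the same image row at equal disparity
p1 = to_3d(700.0, 400.0, 144.0, fx, baseline, cx, cy)
p2 = to_3d(703.0, 400.0, 144.0, fx, baseline, cx, cy)
width = abs(p2[0] - p1[0])
print(round(width * 1000, 3), "mm")
```

Because depth enters the width computation directly, the stereo disparity removes the scale ambiguity that makes single-camera width estimates vary with the viewing angle.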
